In the prior method, for each dimension $d$ we draw dim_n_iter candidate pairs $(ls[d], var[d])$ uniformly from some distance around the current values $D_0 = (ls[d]_0, var[d]_0)$, and the best candidate is kept as the dimensional parameters for $d$, as sketched below.
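For concreteness, a minimal sketch of how I read this per-dimension scheme; log_lik, radius and the uniform perturbation are illustrative assumptions, not the original implementation:

```python
import numpy as np

def per_dim_random_search(log_lik, ls, var, dim_n_iter, radius=0.5):
    """For each dimension d, draw dim_n_iter candidate (ls[d], var[d]) pairs
    uniformly within `radius` of the current D_0 = (ls[d]_0, var[d]_0) and
    keep the pair with the highest likelihood."""
    ls, var = np.array(ls, dtype=float), np.array(var, dtype=float)
    for d in range(len(ls)):
        ls0, var0 = ls[d], var[d]                       # D_0 for this dimension
        best_val, best_ls, best_var = log_lik(ls, var), ls0, var0
        for _ in range(dim_n_iter):
            ls[d] = max(1e-6, ls0 + np.random.uniform(-radius, radius))
            var[d] = max(1e-6, var0 + np.random.uniform(-radius, radius))
            val = log_lik(ls, var)
            if val > best_val:
                best_val, best_ls, best_var = val, ls[d], var[d]
        ls[d], var[d] = best_ls, best_var               # commit the best candidate
    return ls, var
```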
The reason why the prior method works well: its optimization is actually quite weak, since the method simply performs 2D/1D hill climbing. This allows the graph learning to take precedence, which in most instances lets the solution converge to a much better value. In other words, the effect of graph learning on regret is much more important than that of the kernel parameters, which is the reason why the additive model is effective.
There is also another issue: the performance characteristics are poor, requiring $O(d\times\text{dim_n_iter}\times\text{param_budget})$ likelihood evaluations. Even though this scales only linearly in the dimension $d$, it takes a lot of time because each inference is $O(n^3)$. The full complexity is:
$$ O(d\times\text{dim_n_iter}\times\text{param_budget}\times n^3) $$
which becomes a problem when the number of iterations (and hence $n$) is large.
The gradients with respect to the lengthscales are trivial, as they follow the standard derivation for the RBF-ARD kernel.
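For reference (assuming the usual RBF-ARD parameterisation $k(x,x') = \sigma^2\exp\big(-\tfrac{1}{2}\sum_i (x_i - x'_i)^2/\ell_i^2\big)$), the per-lengthscale gradient of the kernel is:
$$ \frac{\partial k(x,x')}{\partial \ell_j} = k(x,x')\,\frac{(x_j - x'_j)^2}{\ell_j^3} $$
which is then plugged into the same trace identity used below.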
Given our definition of the kernel scale parameter for a sub-function with variable group $G$: $$ \sigma_G = \sqrt{\sum_{i\in G} \sigma_i^2} $$
We know that: $$ \DeclareMathOperator{\Tr}{Tr} \frac{dL}{d\sigma_j} = \frac{1}{2}\Tr\Bigg[(\alpha\alpha^T-K^{-1})\sum_{G\in\mathsf{G}} \Big(\frac{dk_G}{d\sigma_G}\frac{d\sigma_G}{d\sigma_j}\Big) \Bigg] $$ and that: $$ \frac{dk_G}{d\sigma_G} = \frac{k_G}{\sigma_G} $$ while, from the definition of $\sigma_G$: $$ \frac{d\sigma_G}{d\sigma_j} = \frac{d}{d\sigma_j}\Bigg(\sqrt{\sum_{i\in G} \sigma_i^2}\Bigg) = \frac{\sigma_j}{\sigma_G} $$
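A minimal sketch of how these pieces combine into the per-variable gradient; the container names (group_kernels, group_scales) are illustrative, and I read $dk_G/d\sigma_G$ as acting on the group's kernel matrix $k_G$:

```python
import numpy as np

def dL_dsigma_j(alpha, K_inv, group_kernels, group_scales, sigma, j):
    """Per-variable scale gradient dL/dsigma_j via the trace identity above.

    alpha         : K^{-1} y, shape (n,)
    K_inv         : inverse of the full (additive) kernel matrix, shape (n, n)
    group_kernels : dict {G: k_G} of per-group kernel matrices, each (n, n)
    group_scales  : dict {G: sigma_G} with sigma_G = sqrt(sum_{i in G} sigma_i^2)
    sigma         : array of per-variable scales sigma_i
    j             : index of the variable we differentiate with respect to
    """
    A = np.outer(alpha, alpha) - K_inv
    dK = np.zeros_like(K_inv)
    for G, k_G in group_kernels.items():
        if j not in G:
            continue  # d(sigma_G)/d(sigma_j) = 0 when j is not in the group
        sigma_G = group_scales[G]
        # (dk_G / dsigma_G) * (dsigma_G / dsigma_j) = (k_G / sigma_G) * (sigma_j / sigma_G)
        dK += (k_G / sigma_G) * (sigma[j] / sigma_G)
    return 0.5 * np.trace(A @ dK)
```

The full gradient vector is then just this applied for each $j$, stacked with the lengthscale gradients.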
Thereafter, we can optimize the likelihood $L$ using the gradients $\frac{dL}{dp}$, where $p=[l_0, var_0, \cdots]$, with some reasonable bounds on the lengthscales $(1e-2, 1e5)$ and variances $(0.31622776601683794, 1e5)$ (the lower bound being $\sqrt{0.1}$), basically using any gradient-based optimizer that supports bounds.
Note that SLSQP was not tried as I think it is too simple and would not work well in higher dimensions.
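A minimal sketch of this fitting step with SciPy's bounded L-BFGS-B; the neg_log_lik/neg_log_lik_grad wrappers and the layout of $p$ (lengthscales first, then variances) are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def fit_kernel_params(neg_log_lik, neg_log_lik_grad, p0, d):
    """Bounded gradient-based fit of p = [lengthscales..., variances...].

    neg_log_lik(p) and neg_log_lik_grad(p) are assumed to return -L and -dL/dp
    for the flat parameter vector p; p0 is the current parameter vector and d
    the number of dimensions.
    """
    bounds = ([(1e-2, 1e5)] * d +                    # lengthscale bounds
              [(0.31622776601683794, 1e5)] * d)      # variance bounds (lower = sqrt(0.1))
    res = minimize(neg_log_lik, p0, jac=neg_log_lik_grad,
                   method='L-BFGS-B', bounds=bounds)
    return res.x, res.fun
```

TNC works the same way via method='TNC'.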
We observe an increase in the time required to run the experiments. This is to be expected, as the number of iterations is higher than in the prior method.
Performance was all over the map in these settings (see the 27 May 2020 entry for the L-BFGS-B run): some runs were equivalent to the prior method in regret and some were worse. A closer analysis shows that the model is having problems learning a better graph structure.
The core issue is that the optimized kernel parameters bias the model too strongly, too early, towards a particular graph structure, which severely hinders learning of the graph structure and leaves it stuck in local optima.
The solution would be to slowly increase the accuracy of the kernel parameter optimization: in early iterations the kernel parameters should be fitted only loosely, and in later iterations more tightly.
I have tried this with both L-BFGS-B and TNC, scheduling two optimizer parameters (see the sketch after this list):

- factr: from 1e18 to 1e8 - this roughly corresponds to accuracy.
- maxfun: the maximum number of function evaluations, from 1 to $d\times\text{dim_n_iter}\times 2$.

I will discuss this a little bit more when we meet!
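A sketch of one way to implement that schedule with scipy.optimize.fmin_l_bfgs_b; the log-linear factr interpolation and linear maxfun growth are assumptions, only the endpoints come from the notes above:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def scheduled_factr(t, T, start=1e18, end=1e8):
    """Interpolate factr log-linearly from loose (1e18) to tight (1e8) over iterations t = 0..T-1."""
    frac = t / max(T - 1, 1)
    return 10.0 ** (np.log10(start) + frac * (np.log10(end) - np.log10(start)))

def scheduled_maxfun(t, T, d, dim_n_iter):
    """Grow the function-evaluation budget from 1 up to d * dim_n_iter * 2 over the run."""
    frac = t / max(T - 1, 1)
    return max(1, int(round(frac * d * dim_n_iter * 2)))

# Hypothetical use inside the BO loop (neg_log_lik, neg_log_lik_grad, p0 and
# bounds as in the fitting sketch above):
# p_opt, f_opt, info = fmin_l_bfgs_b(
#     neg_log_lik, p0, fprime=neg_log_lik_grad, bounds=bounds,
#     factr=scheduled_factr(t, T), maxfun=scheduled_maxfun(t, T, d, dim_n_iter))
```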